Fast Low-parameter Video Activity Localization in Collaborative Learning Environments

Jatla, Venkatesh, Teeparthi, Sravani, Egala, Ugesh, Celedon-Pattichis, Sylvia, Pattichis, Marios S.

arXiv.org Artificial Intelligence

Research on video activity detection has primarily focused on identifying well-defined human activities in short video segments, and most of it develops large-parameter systems that require training on large video datasets. This paper develops a low-parameter, modular system with rapid inferencing capabilities that can be trained entirely on limited datasets without requiring transfer learning from large-parameter systems. The system can accurately detect specific activities and associate them with the students who perform them in real-life classroom videos. Additionally, the paper develops an interactive web-based application to visualize human activity maps over long real-life classroom videos.


VGOS: Voxel Grid Optimization for View Synthesis from Sparse Inputs

Sun, Jiakai, Zhang, Zhanjie, Chen, Jiafu, Li, Guangyuan, Ji, Boyan, Zhao, Lei, Xing, Wei, Lin, Huaizhong

arXiv.org Artificial Intelligence

Neural Radiance Fields (NeRF) have shown great success in novel view synthesis due to their state-of-the-art quality and flexibility. However, NeRF requires dense input views (tens to hundreds) and a long training time (hours to days) for a single scene to generate high-fidelity images. Although using voxel grids to represent the radiance field can significantly accelerate the optimization process, we observe that for sparse inputs, voxel grids are more prone to overfitting to the training views and will have holes and floaters, which leads to artifacts. In this paper, we propose VGOS, an approach for fast (3-5 minutes) radiance field reconstruction from sparse inputs (3-10 views), to address these issues. To improve the performance of voxel-based radiance fields in sparse-input scenarios, we propose two methods: (a) an incremental voxel training strategy, which prevents overfitting by suppressing the optimization of peripheral voxels in the early stage of reconstruction; (b) several regularization techniques to smooth the voxels, which avoid degenerate solutions. Experiments demonstrate that VGOS achieves state-of-the-art performance for sparse inputs with super-fast convergence. Code will be available at https://github.com/SJoJoK/VGOS.
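The incremental voxel training idea described above can be caricatured as a grid mask that grows outward from the scene center during optimization, so only central voxels receive gradient early on. The linear schedule, start fraction, and masking mechanics below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def incremental_voxel_mask(grid_shape, step, total_steps, start_frac=0.25):
    """Binary mask over the voxel grid that starts at a central
    sub-volume and expands linearly to the full grid as training
    progresses; multiplying gradients by it suppresses peripheral
    voxels in the early stage of reconstruction. The schedule and
    start fraction are assumptions for illustration only."""
    frac = start_frac + (1.0 - start_frac) * min(step / total_steps, 1.0)
    mask = np.zeros(grid_shape, dtype=np.float32)
    center = [s // 2 for s in grid_shape]
    half = [max(1, int(round(s * frac / 2))) for s in grid_shape]
    slices = tuple(slice(max(0, c - h), min(s, c + h))
                   for c, h, s in zip(center, half, grid_shape))
    mask[slices] = 1.0
    return mask

early = incremental_voxel_mask((64, 64, 64), step=0, total_steps=1000)
late = incremental_voxel_mask((64, 64, 64), step=1000, total_steps=1000)
# grad_update = raw_grad * mask  # only unmasked voxels are optimized
```

At step 0 only the central 16x16x16 block is active; by the final step the mask covers the full grid.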


History Repeats: Overcoming Catastrophic Forgetting For Event-Centric Temporal Knowledge Graph Completion

Mirtaheri, Mehrnoosh, Rostami, Mohammad, Galstyan, Aram

arXiv.org Artificial Intelligence

Temporal knowledge graph (TKG) completion models typically rely on having access to the entire graph during training. However, in real-world scenarios, TKG data is often received incrementally as events unfold, leading to a dynamic, non-stationary data distribution over time. While one could incorporate fine-tuning into existing methods to allow them to adapt to evolving TKG data, this can lead to forgetting previously learned patterns. Alternatively, retraining the model on the entire updated TKG can mitigate forgetting but is computationally burdensome. To address these challenges, we propose a general continual training framework that is applicable to any TKG completion method and leverages two key ideas: (i) a temporal regularization that encourages repurposing of less important model parameters for learning new knowledge, and (ii) a clustering-based experience replay that reinforces past knowledge by selectively preserving only a small portion of the past data. Our experimental results on widely used event-centric TKG datasets demonstrate the effectiveness of our proposed continual training framework in adapting to new events while reducing catastrophic forgetting. Further, we perform ablation studies to show the effectiveness of each component of the framework. Finally, we investigate the relation between the memory dedicated to experience replay and the benefit gained from our clustering-based sampling strategy.
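The two ideas above can be sketched in a few lines: an importance-weighted quadratic penalty that anchors parameters important for past snapshots while leaving low-importance ones free to be repurposed, and a crude clustering pass that keeps one representative past example per cluster. Both the importance weights and the clustering step are hypothetical stand-ins, not the paper's actual estimators:

```python
import numpy as np

def temporal_regularization(theta, theta_prev, importance, lam=0.1):
    """Quadratic drift penalty, weighted per-parameter: important
    parameters are anchored to the previous snapshot, low-importance
    ones stay free for new knowledge. Importance weights are assumed
    to be given; the paper's estimator is not reproduced here."""
    return lam * np.sum(importance * (theta - theta_prev) ** 2)

def cluster_replay_sample(embeddings, k, seed=0):
    """Select representative past examples: one k-means refinement
    step from random centroids, then keep the point nearest each
    cluster mean. A simplified stand-in for the paper's
    clustering-based experience replay."""
    rng = np.random.default_rng(seed)
    centroids = embeddings[rng.choice(len(embeddings), size=k, replace=False)]
    dists = np.linalg.norm(embeddings[:, None, :] - centroids[None], axis=2)
    labels = dists.argmin(axis=1)
    chosen = set()
    for j in range(k):
        members = np.where(labels == j)[0]
        if len(members) == 0:
            continue
        mean = embeddings[members].mean(axis=0)
        d = np.linalg.norm(embeddings[members] - mean, axis=1)
        chosen.add(int(members[d.argmin()]))
    return sorted(chosen)
```

With zero importance weights the penalty vanishes, so those parameters can change freely; replay memory holds at most k indices into the past data.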


Institutional Foundations of Adaptive Planning: Exploration of Flood Planning in the Lower Rio Grande Valley, Texas, USA

Ross, Ashley D., Nejat, Ali, Greb, Virgie

arXiv.org Artificial Intelligence

Adaptive planning is ideally suited for the deep uncertainties presented by climate change. While there is a robust scholarship on the theory and methods of adaptive planning, it has largely neglected how adaptive planning is affected by existing planning institutions and how to move forward within the constraints of traditional planning organizations. This study asks: How do existing traditional planning institutions support adaptive planning? We explore this question for flood planning in the Lower Rio Grande Valley of Texas, United States. We draw on county hazard plan and regional flood plan documents, as well as transcripts of regional flood planning meetings, to explore the emergent topics of these institutional outputs. Using Natural Language Processing to analyze this large amount of text, we find that hazard plans, and the discussions that develop them, largely lack an adaptive approach.

KEYWORDS: adaptive planning; uncertainty; flood plan; Rio Grande Valley

INTRODUCTION. Planning for natural hazard risk reduction in the context of climate change involves decision making under conditions of interacting, multiple uncertainties. Some of these are "deep uncertainties" connected to long time horizons, nonlinear changes in climates and ecosystems, and the inability to reliably quantify the rate and magnitude of climate changes (Babovic & Mijic, 2018; Bosomworth & Gaillard, 2019). Other uncertainties are associated with the ambiguities and unpredictability of socioeconomic systems, including population growth, land use change, social conflict, and the whims of political will (Babovic & Mijic, 2019; Buurman & Babovic, 2014). In the face of these uncertainties, a new paradigm of decision making has emerged that emphasizes the development of adaptive plans and policies (Haasnoot et al., 2013; Walker et al., 2013).

Traditional planning approaches typically generate a static optimal plan to reduce vulnerability to a single 'most likely' future or to respond to a wide range of plausible future scenarios (Haasnoot et al., 2013; Manocha & Babovic, 2018). Because the future is largely unknowable, static optimal plans are likely to fail, and adaptations are made ad hoc to adjust to emerging risk conditions (Haasnoot et al., 2013).


Ford plans $11 billion investment, 40 electrified...

Daily Mail - Science & tech

Ford will significantly increase its planned investments in electric vehicles to $11 billion by 2022 and will have 40 hybrid and fully electric vehicles in its model lineup, Chairman Bill Ford said at the Detroit auto show. The investment figure is sharply higher than a previously announced target of $4.5 billion by 2020, Ford executives said, and includes the costs of developing dedicated electric vehicle architectures. Ford's engineering, research and development expenses for 2016, the last full year available, were $7.3 billion, up from $6.7 billion in 2015. Of the 40 electrified vehicles Ford plans for its global lineup by 2022, 16 will be fully electric and the rest will be plug-in hybrids, executives said. SUVs figure prominently in Ford's electric future: the automaker's president of global markets, Jim Farley, said on Sunday that Ford would bring a high-performance electric utility vehicle to market by 2020.


Detroit automakers ink deals for self-driving cars

USATODAY - Tech Top Stories

Detroit automakers that viewed Silicon Valley as a "serious threat" just a few years ago are now jumping into deals, acquisitions and investments with West Coast tech companies, describing some of the partnerships as "getting married." The new relationships promise to bring the tech and automotive industries closer as they rush to develop self-driving cars, ride-sharing partnerships and to take advantage of other cutting-edge technology. The change from potential adversaries to partnerships illustrates a growing awareness that neither industry is likely to conquer the other anytime soon and that they need each other to evolve at the speed necessary to remain competitive. But like any successful marriage, the two parties must recognize their differences and figure out how to work more closely together, said Xavier Mosquet, a Detroit-based senior partner in the automotive practice of the Boston Consulting Group. These are industries that move at vastly different paces, operate in entirely different regulatory environments and come from different corporate cultures, he pointed out.


A General Statistic Framework for Genome-based Disease Risk Prediction

Ma, L., Lin, N., Amos, C. I., Xiong, M. M.

arXiv.org Machine Learning

Advances in modern sensing and sequencing technologies generate a deluge of high-dimensional spatio-temporal physiological and next-generation sequencing (NGS) data. Physiological traits are observed either as continuous random functions or on a dense grid, and are referred to as function-valued traits. Both physiological and NGS data are highly correlated, with an inherent order, spacing, and functional nature that are ignored by traditional summary-based univariate and multivariate regression methods designed for quantitative genetic analysis of scalar traits and common variants. To capture the morphological and dynamic features of the data and utilize their dependence structure, we propose a functional linear model (FLM) in which a trait curve is modeled as a response function, the genetic variation in a genomic region or gene is modeled as a functional predictor, and the genetic effects are modeled as a function of both time and genomic position (FLMF), for genetic analysis of function-valued traits with both GWAS and NGS data. By extensive simulations, we demonstrate that the FLMF has the correct type I error rates and much higher power to detect association than existing methods. The FLMF is applied to sleep data from the Starr County health studies, where oxygen saturation was measured over 22,670 seconds on average for 833 individuals. We found 65 genes that were significantly associated with the oxygen saturation functional trait, with P-values ranging from 2.40E-06 to 2.53E-21. The results clearly demonstrate that the FLMF substantially outperforms traditional genetic models with scalar traits.
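A minimal caricature of the functional-linear-model idea: project both the observed trait curve and the genotype "function" over a genomic region onto a small basis, then regress the trait coefficients on the genotype coefficients. The cosine basis, data sizes, and least-squares fit below are illustrative assumptions; the paper's actual bases and test statistics are not reproduced here:

```python
import numpy as np

def basis_expand(values, positions, n_basis):
    """Project a function observed on a grid onto cosine-basis
    coefficients (a simple stand-in for the B-spline or eigenfunction
    bases used in functional data analysis)."""
    t = (positions - positions.min()) / (np.ptp(positions) + 1e-12)
    B = np.cos(np.pi * np.outer(t, np.arange(n_basis)))  # (grid, n_basis)
    coef, *_ = np.linalg.lstsq(B, values, rcond=None)
    return coef

# Toy fit: each subject's trait-curve coefficients regressed on the
# coefficients of that subject's genotype function over the region.
rng = np.random.default_rng(0)
n = 50
grid = np.linspace(0, 1, 40)   # time grid for the trait curve
pos = np.linspace(0, 1, 30)    # genomic positions in the region
G = rng.integers(0, 3, size=(n, 30)).astype(float)  # genotype dosages
Y = G.mean(axis=1, keepdims=True) * np.sin(2 * np.pi * grid) \
    + 0.05 * rng.standard_normal((n, 40))
Xc = np.vstack([basis_expand(g, pos, 4) for g in G])    # (n, 4)
Yc = np.vstack([basis_expand(y, grid, 4) for y in Y])   # (n, 4)
Beta, *_ = np.linalg.lstsq(np.c_[np.ones(n), Xc], Yc, rcond=None)
```

The fitted coefficient matrix plays the role of a bivariate effect surface over time and genomic position, discretized in the chosen bases.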


Viewing the History of Science as Compiled Hindsight

Darden, Lindley

AI Magazine

This article is a written version of an invited talk on artificial intelligence (AI) and the history of science that was presented at the Fifth National Conference on Artificial Intelligence (AAAI-86) in Philadelphia on 13 August 1986. Included is an expanded section on the concept of an abstraction in AI; this section responds to issues that were raised in the discussion which followed the oral presentation. The main point here is that the history of science can be used as a source for constructing abstract theory types to aid in solving recurring problem types. Two theory types that aid in forming hypotheses to solve adaptation problems are discussed: selection theories and instructive theories. Providing cases from which to construct theory types is one way in which to view the history of science as "compiled hindsight" and might prove useful to those in AI concerned with scientific knowledge and reasoning.